Fine-tuning script using Hugging Face
https://github.com/2U1/Gemma3-Finetune
I wrote this code for anyone who, like me, wants to fine-tune using the Hugging Face version and has had difficulty with other frameworks.
The code uses only Hugging Face libraries to fine-tune the 4B, 12B, and 27B models.
You can also set different learning rates for the vision_tower and the language_model (and for the projector as well).
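As a rough sketch of how separate learning rates per module can be wired into a PyTorch optimizer: the idea is to split parameters into optimizer param groups by module-name prefix. The prefixes below follow Gemma 3's Hugging Face module naming (`vision_tower`, `multi_modal_projector`, `language_model`), but the helper itself is illustrative, not the repo's actual code.

```python
# Hypothetical helper: split (name, param) pairs into optimizer param
# groups by module prefix, so each part of the model gets its own LR.
def build_param_groups(named_params, vision_lr, projector_lr, base_lr):
    groups = {"vision": [], "projector": [], "language": []}
    for name, param in named_params:
        if name.startswith("vision_tower"):
            groups["vision"].append(param)
        elif name.startswith("multi_modal_projector"):
            groups["projector"].append(param)
        else:  # everything else, e.g. language_model.*
            groups["language"].append(param)
    return [
        {"params": groups["vision"], "lr": vision_lr},
        {"params": groups["projector"], "lr": projector_lr},
        {"params": groups["language"], "lr": base_lr},
    ]
```

The returned list can then be passed straight to an optimizer, e.g. `torch.optim.AdamW(build_param_groups(model.named_parameters(), 2e-6, 1e-5, 1e-5))`, which is the standard PyTorch mechanism for per-group learning rates.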
Feedback and issues are welcome!
Can you provide notebooks for fine-tuning with LoRA and QLoRA?
Thanks for your interest in my work!
I'm currently developing GRPO code for Gemma 3, and I hope it proves to be useful as well.
@2U1, that sounds great! Thank you so much for your continued interest in the Gemma models. If you require any further assistance, please feel free to reach out to us.
Feel free to close this discussion thread if you are satisfied with the above comments.